AI Ethics And AI Law Clarifying What In Fact Is Trustworthy AI

#artificialintelligence

Will we be able to achieve trustworthy AI, and if so, how? Trust is everything, so they say. The noted philosopher Lao Tzu said that those who do not trust enough will not be trusted. Ernest Hemingway, an esteemed novelist, stated that the best way to find out if you can trust somebody is by trusting them. Meanwhile, it seems that trust is both precious and brittle. The trust that one has can collapse like a house of cards or suddenly burst like a popped balloon. The ancient Greek tragedian Sophocles asserted that trust dies but mistrust blossoms. French philosopher and mathematician Descartes contended that it is prudent never to trust wholly those who have deceived us even once. Billionaire business investor extraordinaire Warren Buffett exhorted that it takes twenty years to build a trustworthy reputation and five minutes to ruin it. You might be surprised to know that all of these varied views and provocative opinions about trust are crucial to the advent of Artificial Intelligence (AI). Yes, there is something keenly referred to as trustworthy AI that keeps getting a heck of a lot of attention these days, including handwringing catcalls from within the field of AI and also boisterous outbursts by those outside of the AI realm. The overall notion entails whether or not society is going to be willing to place trust in the likes of AI systems. Presumably, if society won't or can't trust AI, the odds are that AI systems will fail to get traction.


AI Ethics And The Quest For Self-Awareness In AI

#artificialintelligence

Giving heavy thought to AI and self-awareness, combined with ethical behavior and AI ethics. Are you self-aware? I'd bet that you believe you are. The thing is, supposedly, few of us are especially self-aware. There is a range or degree of self-awareness and we all purportedly vary in how astutely self-aware we are. You might think you are fully self-aware and only be marginally so. You might be thinly self-aware and realize that's your mental state. Meanwhile, at the topmost part of the spectrum, you might believe you are fully self-aware and indeed are frankly about as self-aware as they come. Speaking of which, what good does it do to be exceedingly self-aware? According to research published in the Harvard Business Review (HBR) by Tasha Eurich, you reportedly are able to make better decisions, you are more confident in your decisions, you are stronger in your communication capacities, and you are more effective overall (per the article entitled "What Self-Awareness Really Is (and How to Cultivate It)"). The bonus factor is that those with strident self-awareness are said to be less inclined to cheat, steal, or lie. In that sense, there is a twofer of averting being a scoundrel or a crook, along with striving to be a better human being and embellish your fellow humankind. All of this talk about self-awareness brings up a somewhat obvious question, namely, what does the phrase self-awareness actually denote? You can readily find tons of various definitions and interpretations about the complex and shall we say mushy construct entailing being self-aware. Some would simplify matters by suggesting that self-awareness consists of monitoring your own self, knowing what yourself is up to. You are keenly aware of your own thoughts and actions. Presumably, when not being self-aware, a person would not realize what they are doing, nor why so, and also not be cognizant of what other people have to say about them. I'm sure you've met people that are like this.
Some people appear to walk this earth without a clue of what they themselves are doing, and nor do they have a semblance of what others are saying about them.


AI Ethics Saying That AI Should Be Especially Deployed When Human Biases Are Aplenty

#artificialintelligence

Trying to overcome untoward human biases by replacing them with AI is not as straightforward as it might seem. Humans have got to know their limitations. You might recall the akin famous line about knowing our limitations as grittily uttered by the character Dirty Harry in the 1973 movie entitled Magnum Force (per the spoken words of actor Clint Eastwood in his memorable role as Inspector Harry Callahan). The overall notion is that sometimes we tend to overlook our own limits and get ourselves into hot water accordingly. Whether due to hubris, being egocentric, or simply blind to our own capabilities, the precept of being aware of and taking into explicit account our proclivities and shortcomings is abundantly sensible and helpful. Let's add a new twist to the sage piece of advice. Artificial Intelligence (AI) has got to know its limitations. What do I mean by that variant of the venerated catchphrase? Turns out that the initial rush to get modern-day AI into use as a hopeful solver of the world's problems has become sullied and altogether muddied by the realization that today's AI does have some rather severe limitations. We went from the uplifting headlines of AI For Good and have increasingly found ourselves mired in AI For Bad. You see, many AI systems have been developed and fielded with all sorts of untoward racial and gender biases, and a myriad of other such appalling inequities.


AI Ethics Flummoxed By Those Salting AI Ethicists That "Instigate" Ethical AI Practices

#artificialintelligence

Is it okay or is it questionable for those salting AI Ethicists that seek to get hired by a firm solely to stoke Ethical AI precepts from within? Salting has been in the news quite a bit lately. I am not referring to the salt that you put into your food. Instead, I am bringing up the "salting" that is associated with a provocative and seemingly highly controversial practice associated with the interplay between labor and business. You see, this kind of salting entails the circumstance whereby a person tries to get hired into a firm to ostensibly initiate or, some might arguably say, instigate the establishment of a labor union therein. I will cover first the basics of salting and then will switch to an akin topic that you might be quite caught off-guard about, namely that there seems to be a kind of salting taking place in the field of Artificial Intelligence (AI). This has crucial AI Ethics considerations. For my ongoing and extensive coverage of AI Ethics and Ethical AI, see the link here and the link here, just to name a few. Now, let's get into the fundamentals of how salting typically works. Suppose that a company does not have any unions in its labor force. One means of getting a union started would be to take action outside of the company and try to appeal to the workers that they should join a union. This might involve showcasing banners nearby to the company headquarters or sending the workers flyers or utilizing social media, and so on. This is a decidedly outside-in type of approach. Another avenue would be to spur from within a spark that might get the ball rolling.


AI Startups Finally Getting Onboard With AI Ethics And Loving It, Including Those Newbie Autonomous Self-Driving Car Tech Firms Too

#artificialintelligence

AI startups are increasingly embracing AI ethics, though this is trickier than it might seem at first glance. Whatever you are thinking, think bigger. Fake it until you make it. These are the typical startup lines that you hear or see all the time. They have become a kind of advisory lore amongst budding entrepreneurs. If you wander around Silicon Valley, you'll probably see bumper stickers with those slogans and likely witness high-tech founders wearing hoodies emblazoned with such tropes. AI-related startups are assuredly included in the bunch. Perhaps we might though add an additional piece of startup success advice for nascent AI-focused firms, namely that they should energetically embrace AI ethics. That is a bumper sticker-worthy notion and assuredly a useful piece of sage wisdom for any AI founder who is trying to figure out how to be a proper leader and a winning entrepreneur. For my ongoing and extensive coverage of AI Ethics and Ethical AI, see the link here and the link here, just to name a few. The first impulse of many AI startups is likely the exact opposite of wanting to embrace AI ethics. Often, the focus of an AI startup is primarily about getting some tangible AI system out the door as quickly as possible. There is usually tremendous pressure to produce an MVP (minimum viable product). Investors are skittish about putting money into some newfangled AI contrivance that might not be buildable, and therefore the urgency to craft an AI pilot or prototype is paramount.



Democratization Of AI Is Said To Be Essential For AI Ethics But The Devil Is In The Details, Including The Case Of AI-Based Self-Driving Cars

#artificialintelligence

A big push is underway to democratize AI, though we need to figure out what this actually means and what it foretells. That phrasing is a well-intended respectful appropriation from Abraham Lincoln's famous 1863 Gettysburg Address in which he memorably stated that our democracy proffers a new birth of freedom entwining an erstwhile government of the people, by the people, and for the people. This same notion of the power of people was also captured in an earlier speech by Senator Daniel Webster in 1830, in which he exhorted that our government was made for the people, made by the people, and answerable to the people. You could readily assert that those keystone elements are a bedrock of democracy and democratization. The reason that I've leveraged such a famed saying is that it seemingly can be purposely applied to Artificial Intelligence. You see, there is a great deal of vocal discussion these days about the democratization of AI. In brief, there is a heartfelt belief that we need to make sure that pretty much anybody at all can craft and deploy AI systems. The ardent view is that we are presently mired in having only techie-focused specialists that can put AI into use. A relatively narrow and devoted clique of AI experts and high-tech entities are dominating which AI systems we are getting and how those AI systems are being devised, the argument goes. We, the people, cannot allow ourselves to become controlled and overseen by a seeming handful of AI gurus. I think you can probably see how there is a claimed analogous pattern between the notion of how people are governed overall and how AI is being foisted upon the world.


Ethical AI Ambitiously Hoping To Have AI Learn Ethical Behavior By Itself, Such As The Case With AI In Autonomous Self-Driving Cars

#artificialintelligence

Can AI learn ethical precepts on its own? Aristotle famously stated that educating the mind without educating the heart is no education at all. You could interpret that insightful remark to suggest that learning about ethics and moral behavior is keenly vital for humankind. In the classic nature versus nurture debate, one must ask how much of our ethical mores are instinctively native while how much is learned over the course of our living days. Toddlers are observant of fellow humans and presumably glean their ethical foundations based on what they see and hear. The same can be said of teenagers. For open-minded adults, they too will continue to adjust and progress in their ethical thinking as a result of experiencing the everyday world. Of course, explicitly teaching someone about ethics is also par for the course. People are bound to learn about ethical ways via attending classes on the topic or perhaps by going to events and practices of interest to them. Ethical values can be plainly identified and shared as a means to aid others in formulating their own structure of ethics. In addition, ethics might be subtly hidden within stories or other instructional modes that ultimately carry a message of what ethical behavior consists of. That's how humans seem to become imbued with ethics. I realize such a question might seem oddish. We certainly expect humans to incorporate ethics and walk through life with some semblance of a moral code. It is a simple and obvious fact.


The Ethical AI Question Of Whether Self-Driving Cars Ought To Be A Good Samaritan And Forewarn When Human-Driven Cars Are Going To Crash Into Each Other

#artificialintelligence

When driving, should you mind your own business or help other drivers? That's the question to be pondered. Being a Good Samaritan is a common refrain and refers to the notion of being helpful to others, even though they might be complete strangers and you do not know them at all. Which of those two catchphrases or words of wisdom would you choose? You probably make daily decisions about those two possibilities. There are situations and settings wherein you opt to mind your own business. At times, it might be quite tempting to step into the middle of something, but you weigh the pros and cons of doing so, and then at times move along and do not get into the fray. On the other hand, there are times that you decide it is best to jump into the swimming pool, as it were, and get engaged. Let's turn this somewhat conceptual or philosophical discussion into something very grounded and real. I was driving my car the other day and had come up to an intersection to make a left turn.


The Ethical Debate About Whether AI Ought To Warn You When The Self-Driving Car That You Are Riding In Is About To Crash

#artificialintelligence

Considering whether AI ought to warn human passengers about an impending crash or collision. We've all likely had our share of car crashes over the years. Let's trace the various published research underlying a somewhat simple but altogether crucial question, namely: if you know that a crash is about to occur, should you go limp or attempt to tighten and brace yourself? Turns out that the answer is complicated and often dependent upon the circumstances at hand. First, there is a popular assumption that you ought to let your body go loose or limp when an impending car crash is about to occur. Some claim that this ragdoll posturing will be advantageous. The purported logic is that we all know that a straight and narrow stick will presumably break and snap entirely when placed under intense pressure. As such, if you tense up, you are risking all manner of personal bodily damage. According to the sage wisdom attributed to Confucius: "The reed which bends in the wind is stronger than the mighty oak which breaks in a storm." I don't believe though that Confucius had an opportunity to drive or ride in an automobile (he lived from 551 BC to 479 BC, while cars were essentially invented in the late 1880s).